docs(plugin-meilisearch): add guidelines about auto recraw #434


Closed
wants to merge 62 commits into from

Conversation

JQiue
Contributor

@JQiue JQiue commented Apr 14, 2025

Before submitting the PR, please make sure you do the following

  • Read the Contributing Guidelines.
  • Provide a description in this PR that addresses what the PR is solving. If this PR is going to solve an existing issue, please reference the issue (e.g. close #123).

What is the purpose of this pull request?

  • Bug fix
  • New feature
  • Other

Description

Screenshots

Before

After

@coveralls

coveralls commented Apr 14, 2025

Pull Request Test Coverage Report for Build 14459797368

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 59.517%

Totals

  • Change from base Build 14447791065: 0.0%
  • Covered Lines: 1358
  • Relevant Lines: 2059

💛 - Coveralls

@@ -153,6 +153,42 @@ When the crawl is complete, MeiliSearch stores the crawled document in the speci

> See <https://www.meilisearch.com/docs/guides/front_end/search_bar_for_docs#scrape-your-content>

### Using Github Action for Automatic Scraping

In the Github repository, go to `Settings` -> `Secrets and variables` -> `Actions` -> `New repository secret` to set `MEILISEARCH_API_KEY` and `MEILISEARCH_HOST_URL`. And name your scraper configuration file `meilisearch-scraper.json` and place it in the root directory of your project.
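The step above can be sketched as a workflow file. This is a hypothetical example, not part of the PR: the file name `.github/workflows/meilisearch-scrape.yml`, the `push` trigger, and the Docker invocation are assumptions; only the two secret names and the `meilisearch-scraper.json` config file name come from the text above.

```yaml
# Hypothetical workflow sketch; trigger and paths are assumptions.
name: Scrape docs for MeiliSearch

on:
  push:
    branches: [main]  # assumption: re-scrape whenever main is updated

jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run docs-scraper
        env:
          # Both values come from the repository secrets set above
          MEILISEARCH_HOST_URL: ${{ secrets.MEILISEARCH_HOST_URL }}
          MEILISEARCH_API_KEY: ${{ secrets.MEILISEARCH_API_KEY }}
        run: |
          docker run -t --rm \
            -e MEILISEARCH_HOST_URL -e MEILISEARCH_API_KEY \
            -v "$PWD/meilisearch-scraper.json:/docs-scraper/config.json" \
            getmeili/docs-scraper:latest pipenv run ./docs_scraper config.json
```

Passing `-e MEILISEARCH_HOST_URL` without a value forwards the variable from the step's environment into the container, so the secrets never appear in the command line.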
Member


Suggested change
In the Github repository, go to `Settings` -> `Secrets and variables` -> `Actions` -> `New repository secret` to set `MEILISEARCH_API_KEY` and `MEILISEARCH_HOST_URL`. And name your scraper configuration file `meilisearch-scraper.json` and place it in the root directory of your project.
Place your scraper config file somewhere in your project.
Then go to `Settings` -> `Secrets and variables` -> `Actions` in your GitHub repository. Click `New repository secret` and set `MEILISEARCH_HOST_URL` and `MEILISEARCH_API_KEY` to your Meilisearch server address and client key.

Contributor Author


Change MEILISEARCH_HOST_URL to env instead of secret
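In Actions workflow syntax, the distinction the reply draws would look roughly like this (hypothetical fragment; `vars.MEILISEARCH_HOST_URL` assumes the host URL is moved to a repository *variable* rather than a secret):

```yaml
env:
  # Host URL is not sensitive, so it can be a repository variable
  MEILISEARCH_HOST_URL: ${{ vars.MEILISEARCH_HOST_URL }}
  # The API key must remain a secret
  MEILISEARCH_API_KEY: ${{ secrets.MEILISEARCH_API_KEY }}
```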

@@ -153,6 +153,42 @@ docker run -t --rm \

> See <https://www.meilisearch.com/docs/guides/front_end/search_bar_for_docs#scrape-your-content>

### Using Github Action for Automatic Scraping

In the GitHub repository, go to `Settings` -> `Secrets and variables` -> `Actions` -> `New repository secret` to set `MEILISEARCH_API_KEY` and `MEILISEARCH_HOST_URL`, and name your scraper configuration file `melisearch_scraper.json` and place it in the root directory of your project.
Member


Suggested change
In the GitHub repository's `Settings` -> `Secrets and variables` -> `Actions` -> `New repository secret`, set `MEILISEARCH_API_KEY` and `MEILISEARCH_HOST_URL`, and name your scraper configuration file `melisearch_scraper.json` and place it in the root directory of your project
Place your scraper config file in your project directory
Then, go to `Settings` -> `Secrets and variables` -> `Actions` in your GitHub repository. Click `New repository secret` and set `MEILISEARCH_HOST_URL` and `MEILISEARCH_API_KEY` to your Meilisearch server address and client key.

@Mister-Hope Mister-Hope changed the title update meilisearch docs docs(plugin-meilisearch): add guidelines about auto recraw Apr 15, 2025

3 participants